nlp_architect.models.memn2n_dialogue.MemN2N_Dialog

class nlp_architect.models.memn2n_dialogue.MemN2N_Dialog(batch_size, vocab_size, sentence_size, memory_size, embedding_size, num_cands, max_cand_len, hops=3, max_grad_norm=40.0, nonlin=None, initializer=<tensorflow.python.ops.init_ops.RandomNormal object>, optimizer=<tensorflow.python.training.adam.AdamOptimizer object>, session=<tensorflow.python.client.session.Session object>, name='MemN2N_Dialog')[source]

End-To-End Memory Network.

__init__(batch_size, vocab_size, sentence_size, memory_size, embedding_size, num_cands, max_cand_len, hops=3, max_grad_norm=40.0, nonlin=None, initializer=<tensorflow.python.ops.init_ops.RandomNormal object>, optimizer=<tensorflow.python.training.adam.AdamOptimizer object>, session=<tensorflow.python.client.session.Session object>, name='MemN2N_Dialog')[source]

Creates an End-To-End Memory Network for Goal Oriented Dialog

Parameters
  • cands – Encoded candidate answers.

  • batch_size – The size of the batch.

  • vocab_size – The size of the vocabulary (should include the nil word). The nil word one-hot encoding should be 0.

  • sentence_size – The max size of a sentence in the data. All sentences should be padded to this length. If padding is required, it should be done with the nil one-hot encoding.

  • memory_size – The max size of the memory. Since TensorFlow currently does not support jagged arrays, all memories must be padded to this length. If padding is required, the extra memories should be empty memories; memories filled with the nil word.

  • embedding_size – The size of the word embedding.

  • hops – The number of hops. A hop consists of reading and addressing a memory slot. Defaults to 3.

  • max_grad_norm – Maximum L2 norm clipping value. Defaults to 40.0.

  • nonlin – Non-linearity. Defaults to None.

  • initializer – Weight initializer. Defaults to tf.random_normal_initializer.

  • optimizer – Optimizer algorithm used for SGD. Defaults to tf.train.AdamOptimizer.

  • session – TensorFlow Session the model is run with. Defaults to tf.Session().

  • name – Name of the End-To-End Memory Network. Defaults to MemN2N.
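The nil-word padding convention described above (sentences padded to sentence_size, stories padded to memory_size with empty memories) can be sketched in plain NumPy. The `pad_story` helper is hypothetical, for illustration only; it is not part of nlp_architect:

```python
import numpy as np

def pad_story(story, sentence_size, memory_size):
    """Pad each sentence to sentence_size and the story to memory_size.

    `story` is a list of sentences, each a list of word indices.
    The nil word is index 0, so padding is done with zeros.
    """
    # Pad every sentence with the nil word up to sentence_size.
    padded = [s + [0] * (sentence_size - len(s)) for s in story]
    # Empty memories are sentences filled entirely with the nil word.
    padded += [[0] * sentence_size] * (memory_size - len(padded))
    return np.array(padded)

story = [[4, 7], [2, 9, 5]]  # two sentences of word indices
mem = pad_story(story, sentence_size=4, memory_size=3)
print(mem.shape)  # (3, 4)
```

Memories built this way match the (memory_size, sentence_size) layout that `batch_fit` and `predict` expect per example.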

Methods

__init__(batch_size, vocab_size, …[, …])

Creates an End-To-End Memory Network for Goal Oriented Dialog

batch_fit(stories, queries, answers, cands)

Runs the training algorithm over the passed batch

predict(stories, queries, cands)

Predicts answers as one-hot encoding.

batch_fit(stories, queries, answers, cands)[source]

Runs the training algorithm over the passed batch

Parameters
  • stories – Tensor (None, memory_size, sentence_size)

  • queries – Tensor (None, sentence_size)

  • answers – Tensor (None, vocab_size)

Returns

floating-point number, the loss computed for the batch

Return type

loss
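As a shape check, the tensors `batch_fit` expects can be mocked up with NumPy per the Parameters above (the leading None dimension shown as a concrete batch_size; all sizes here are arbitrary example values):

```python
import numpy as np

batch_size, memory_size, sentence_size, vocab_size = 32, 10, 6, 40

# Word indices for each memory sentence, padded with the nil word (0).
stories = np.zeros((batch_size, memory_size, sentence_size), dtype=np.int32)
# Word indices for each query, padded the same way.
queries = np.zeros((batch_size, sentence_size), dtype=np.int32)
# One-hot answers over the vocabulary.
answers = np.zeros((batch_size, vocab_size), dtype=np.int32)
answers[np.arange(batch_size), 3] = 1

# loss = model.batch_fit(stories, queries, answers, cands)  # returns a float
```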

predict(stories, queries, cands)[source]

Predicts answers as one-hot encoding.

Parameters
  • stories – Tensor (None, memory_size, sentence_size)

  • queries – Tensor (None, sentence_size)

Returns

Tensor (None, vocab_size)

Return type

answers
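The forward pass that `predict` runs is built from the hops described above. A single hop, per the parameter documentation, reads and addresses a memory slot; the following NumPy sketch is an illustrative approximation of that step (assumed simplified form, not the library's actual implementation, which uses separate embedding matrices per hop):

```python
import numpy as np

rng = np.random.default_rng(0)
embedding_size, memory_size = 8, 5

u = rng.standard_normal(embedding_size)                 # embedded query
m = rng.standard_normal((memory_size, embedding_size))  # embedded memories

def hop(u, m):
    scores = m @ u                 # addressing: match query to each memory
    p = np.exp(scores - scores.max())
    p /= p.sum()                   # softmax attention over memory slots
    o = p @ m                      # reading: attention-weighted sum of memories
    return u + o                   # updated query state for the next hop

for _ in range(3):                 # hops=3, matching the constructor default
    u = hop(u, m)
print(u.shape)  # (8,)
```

The final state is matched against the encoded candidate answers to score them; taking an argmax over those scores yields the one-hot answer encoding that `predict` returns.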